A concise and measurable set of FAIR (findable, accessible, interoperable, and reusable) principles for scientific data is transforming the state of practice for data management and stewardship, in support of and enabling discovery and innovation. Learning from this initiative, and acknowledging the impact of artificial intelligence (AI) in the practice of science and engineering, we introduce a set of practical, concise, and measurable FAIR principles for AI models. We showcase how to create and share FAIR data and AI models within a unified computational framework combining the following elements: the Advanced Photon Source at Argonne National Laboratory, the Materials Data Facility, the Data and Learning Hub for Science, funcX, and the Argonne Leadership Computing Facility (ALCF), in particular the ThetaGPU supercomputer and the SambaNova DataScale system at the ALCF AI Testbed. We describe how this domain-agnostic computational framework may be harnessed to enable autonomous AI-driven discovery.
translated by 谷歌翻译
This paper proposes a novel self-supervised Cut-and-Paste GAN to perform foreground object segmentation and generate realistic composite images without manual annotations. We accomplish this goal with a simple yet effective self-supervised approach coupled with a U-Net based discriminator. The proposed method extends the ability of standard discriminators to learn not only the global data representations via classification (real/fake) but also semantic and structural information through pseudo labels created using the self-supervised task. The proposed method empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that our proposed method significantly outperforms the state-of-the-art methods on the standard benchmark datasets.
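The cut-and-paste compositing at the heart of this approach can be illustrated with a minimal sketch: the generator's soft mask alpha-blends a foreground crop onto a background image, and the resulting composite is what the discriminator judges. The function and array shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def composite(foreground, background, mask):
    """Alpha-blend a foreground onto a background using a soft mask.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W, 1) in [0, 1], e.g. a generator's output
    """
    return mask * foreground + (1.0 - mask) * background

# Toy example: paste a bright square onto a dark background.
fg = np.ones((8, 8, 3))
bg = np.zeros((8, 8, 3))
mask = np.zeros((8, 8, 1))
mask[2:6, 2:6] = 1.0
out = composite(fg, bg, mask)
```

A degenerate all-zero or all-one mask leaves the composite identical to one input image, which is exactly the failure mode the per-pixel discriminator feedback is meant to discourage.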
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior on answering questions about a story after performing two novel semantic interventions -- deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions for a significant number of cases (~50% for deletion intervention, and ~20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that can mitigate the undesirable effects of deletion intervention by a significant margin (from ~50% to ~6%). We analyze the inner workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. But we show that this training does not attenuate other aspects of semantic unfaithfulness such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models do achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
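The two interventions can be sketched as simple text transformations applied to a story before re-querying a QA model; a semantically faithful model should change its answer after either one. The helper names and the crude verb-based negation below are illustrative assumptions, not the paper's procedure.

```python
def deletion_intervention(story_sentences, support_idx):
    """Remove the sentence that supports the gold answer."""
    return [s for i, s in enumerate(story_sentences) if i != support_idx]

def negation_intervention(sentence):
    """Crudely negate a supporting sentence (illustrative only)."""
    for verb in (" is ", " was ", " are ", " were "):
        if verb in sentence:
            return sentence.replace(verb, verb.rstrip() + " not ", 1)
    return "It is not the case that " + sentence[0].lower() + sentence[1:]

story = ["Anna lives in Paris.", "The weather was sunny."]
# After deleting sentence 0, "Where does Anna live?" is unanswerable,
# so a faithful model should no longer answer "Paris".
reduced = deletion_intervention(story, 0)
negated = negation_intervention("The weather was sunny.")
```

In this framing, the failure mode reported above is a model that keeps producing the original answer even though its textual support has been deleted or negated.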
Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond the traditional data-to-text generation. While this framework is more general, it is under-specified and often leads to a lack of controllability, restricting its real-world usage. We propose a new grounded keys-to-text generation task: the task is to generate a factual description of an entity given a set of guiding keys and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experimental results show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improved the factual correctness of generated descriptions (15.95% and 34.51% relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings.
Narrative summarization aims to produce a distilled version of a narrative to describe its most salient events and characters. Summarizing a narrative is challenging as it requires an understanding of event causality and character behaviors. To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset. It contains 122K narrative documents, which are collected from plot descriptions of movies and TV episodes with diverse genres, and their corresponding abstractive summaries. Experiments show that there is a large performance gap between humans and the state-of-the-art summarization models on NarraSum. We hope that this dataset will promote future research in summarization, as well as broader studies of natural language understanding and generation. The dataset is available at https://github.com/zhaochaocs/narrasum.
Opinion summarization is the task of creating summaries capturing popular opinions in user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system that performs unsupervised extractive opinion summarization. GeoSumm involves an encoder-decoder based representation learning model that represents text as a distribution over latent semantic units. GeoSumm generates these representations by performing dictionary learning over pre-trained text representations at multiple decoder layers. We then use these representations to quantify the relevance of review sentences using a novel geodesic distance-based scoring mechanism. We use the relevance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves state-of-the-art performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase its ability to generalize across different domains.
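Since the representations are distributions over latent semantic units, a natural geodesic between them is the Fisher-Rao distance on the probability simplex. The sketch below computes that distance and a simple relevance score; the scoring function is a hypothetical illustration of distance-based relevance, not GeoSumm's exact formula.

```python
import numpy as np

def geodesic_distance(p, q):
    """Fisher-Rao geodesic distance between two distributions over
    latent semantic units (points on the probability simplex)."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))

def relevance(sentence_repr, mean_review_repr):
    """Hypothetical score: sentences whose semantic distribution lies
    closer (geodesically) to the reviews' mean are more relevant."""
    return -geodesic_distance(sentence_repr, mean_review_repr)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.6, 0.3, 0.1])
d_same = geodesic_distance(p, p)   # 0 for identical distributions
d_diff = geodesic_distance(p, q)   # small but positive
```

Ranking review sentences by such a score and keeping the top ones yields an extractive summary of the popular opinions.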
Many studies on scaling laws consider basic factors such as model size, model shape, dataset size, and compute. These factors are easy to tune and represent the fundamental elements of any machine learning setup. But researchers have also employed more complex factors to estimate test error and generalization performance with high predictability. These factors are usually specific to a domain or application. For example, feature diversity was mainly used to promote syn-to-real transfer by Chen et al. (2021). With many scaling factors defined in previous works, we study how these factors affect overall generalization performance in the context of self-supervised learning with CNN models. How do individual factors promote generalization, including varying depth, width, or the number of training epochs with early stopping? For example, does higher feature diversity lead to higher accuracy in complex settings other than syn-to-real transfer? How do these factors depend on each other? We find that the last layer is the most diverse throughout training. However, while a model's test error decreases with increasing epochs, its diversity drops. We also find that diversity is directly related to model width.
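One common proxy for the feature diversity of a layer is the effective rank of its feature matrix, i.e. the exponentiated entropy of its normalized singular values; treating that as the diversity measure here is an assumption, not necessarily the paper's exact definition. The sketch below shows why redundant features score low and well-spread features score high.

```python
import numpy as np

def feature_diversity(features):
    """Effective rank of a (samples x dims) feature matrix: the
    exponentiated entropy of its normalized singular values.
    Rank-1 (redundant) features give ~1; spread-out features give
    a value near the full dimensionality."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
diverse = rng.normal(size=(64, 32))                 # ~full-rank features
redundant = np.tile(rng.normal(size=(64, 1)), 32)   # rank-1 features
```

Tracking such a score per layer and per epoch is one way to observe the trends reported above, e.g. diversity peaking in the last layer and dropping as training progresses.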
Machine learning systems are often deployed for making critical decisions, such as credit lending, hiring, etc. In making such decisions, these systems often encode the user's demographic information (such as gender, age) in their intermediate representations. This can lead to decisions biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair as the task or demographic distribution changes. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the \textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?}) challenge, which was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-cohort data were used for both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - Microbleeds, and not yet practically usable results for Task 3 - Lacunes. The challenge also highlighted performance inconsistencies across cases that may hinder use at the individual level, while still proving useful at a population level.
Recent years have seen great advancements in the development of deep learning models for digital pathology (DP) applications, evidenced by the increasingly common deployment of these models in both research and clinical settings. Although such models have shown unprecedented performance in solving fundamental computational tasks in DP applications, they suffer from catastrophic forgetting when adapted to unseen data with transfer learning. With an increasing need for deep learning models to handle ever-changing data distributions, including evolving patient populations and new diagnostic assays, continual learning (CL) models that alleviate model forgetting need to be introduced in DP-based analyses. However, to the best of our knowledge, there has been no systematic study of such models for DP-specific applications. Here, we propose CL scenarios in DP settings, where histopathology image data from different sources/distributions arrive sequentially and their knowledge is integrated into a single model without training on all the data from scratch. We then established an augmented dataset for colorectal cancer H&E classification to simulate shifts in image appearance and evaluated CL model performance in the proposed CL scenarios. We leveraged a breast tumor H&E dataset along with the colorectal cancer one to evaluate CL on different tumor types. In addition, we evaluated CL methods in an online few-shot setting under the constraints of annotation and computational resources. We reveal promising results of CL in DP applications, which may pave the way for the application of these approaches in clinical practice.